3,084 research outputs found

    Cancer biomarker development from basic science to clinical practice

    The amount of published literature on biomarkers has increased exponentially over the last two decades. Cancer biomarkers are molecules that are either part of tumour cells or secreted by tumour cells. Biomarkers can be used for diagnosing cancer (tumour versus normal, and differentiation of subtypes), prognosticating patients (progression-free survival and overall survival) and predicting response to therapy. However, very few biomarkers are currently used in clinical practice compared to the unprecedented discovery rate. Some examples are: carcinoembryonic antigen (CEA) for colon cancer; prostate-specific antigen (PSA) for prostate cancer; and estrogen receptor (ER), progesterone receptor (PR) and HER2 for breast cancer. Cancer biomarkers pass through a series of phases before they are used in clinical practice. The first phase in biomarker development is the identification of biomarkers, which involves discovery, demonstration and qualification. This is followed by the validation phase, which includes verification, prioritisation and initial validation. Larger-scale and outcome-oriented validation studies expedite the clinical translation of biomarkers by providing a strong ‘evidence base’. The final phase in biomarker development is the routine clinical use of the biomarker. In summary, careful identification of biomarkers followed by validation in well-designed retrospective and prospective studies is a systematic strategy for developing clinically useful biomarkers.

    Supersoluble groups of Wielandt length two

    Exploiting peer group concept for adaptive and highly available services

    This paper presents a prototype of a redundant, highly available and fault-tolerant peer-to-peer framework for data management. Peer-to-peer computing is gaining importance due to its flexible organization, lack of central authority, distribution of functionality to participating nodes, and ability to utilize unused computational resources. The emergence of Grid computing has provided the much-needed infrastructure and administrative domain for peer-to-peer computing. The components of this framework exploit the peer group concept to scope service and information searches, arrange services and information in a coherent manner, provide selective redundancy, and ensure availability in the face of failure and high-load conditions. A prototype system has been implemented using the JXTA peer-to-peer technology; XML is used for service descriptions and interfaces, allowing peers to communicate with services implemented on various platforms, including web services and JINI services. It utilizes code mobility to achieve role interchange among services and to ensure dynamic group membership. Security is ensured by using a Public Key Infrastructure (PKI) to implement group-level security policies for membership and service access. Comment: 5 pages, 6 figures; submitted to Computing in High Energy and Nuclear Physics (CHEP03), 24-28 March 2003, La Jolla, California.
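    As a schematic illustration of the peer group concept described above (this is not the authors' code, and every class and method name below is hypothetical, not the real JXTA API), the following Java sketch shows how scoping service lookup to a group, combined with redundant replicas, yields fail-over under node failure:

```java
import java.util.*;

/**
 * Hypothetical sketch of peer-group-scoped service lookup with
 * selective redundancy. The real prototype uses JXTA advertisements
 * and discovery, which are not reproduced here.
 */
public class PeerGroupSketch {

    /** A service replica hosted by some peer in the group. */
    record ServiceInstance(String peerId, boolean alive) {}

    /** A peer group scopes both membership and service discovery. */
    static class PeerGroup {
        private final String name;
        private final Map<String, List<ServiceInstance>> services = new HashMap<>();

        PeerGroup(String name) { this.name = name; }

        /** Advertise a redundant replica of a service inside this group. */
        void advertise(String serviceName, ServiceInstance instance) {
            services.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(instance);
        }

        /**
         * Lookup is scoped to the group: only replicas advertised here
         * are considered, and dead replicas are skipped (fail-over).
         */
        Optional<ServiceInstance> lookup(String serviceName) {
            return services.getOrDefault(serviceName, List.of()).stream()
                           .filter(ServiceInstance::alive)
                           .findFirst();
        }
    }

    public static void main(String[] args) {
        PeerGroup group = new PeerGroup("data-management");
        group.advertise("catalog", new ServiceInstance("peer-A", false)); // failed node
        group.advertise("catalog", new ServiceInstance("peer-B", true));  // live replica
        System.out.println(group.lookup("catalog")); // falls over to peer-B
    }
}
```

    A real deployment would replace the in-memory map with group-scoped advertisements and discovery over the network, but the scoping and fail-over logic sketched here is the same idea.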

    Syncretic Architecture of Fatehpur Sikri: a Symbol of Composite Culture

    The amalgamation of native Indians and Muslim immigrants bore fruit in a prolific way in the realms of literature, art, music, technology and, especially, architecture, which reached its zenith during the Mughal period. Akbar, the great Mughal, is widely recognized for his syncretism and religious tolerance. With the power of his influential personality and eclectic approach, he unified various artistic traditions and architectural styles in the design of his new capital city, Fatehpur Sikri. Although the architectural forms and construction techniques involved in the design of the city had been in use since the arrival of Islam in the Indian subcontinent, their synthesis reached its peak at Fatehpur Sikri, where the traditionally rich and fanciful Indian style was merged with the lightness and simplicity of the Islamic style. This paper focuses on the unique intermingling of two entirely different styles which, like the cultures that produced them, were born in different regions and shaped by different approaches. This fusion gave rise to a new style in architecture, besides influencing several other aspects of life in India.

    Interpretation of Modified Electromagnetic Theory and Maxwell's Equations on the Basis of Charge Variation

    Electromagnetic waves are analytical solutions of Maxwell's equations, which represent one of the most elegant and concise ways to state the fundamentals of electricity and magnetism. From them one can develop most of the working relationships in electric and magnetic fields. When the effect of charge variation is considered in Maxwell's equations for the time-varying electric and magnetic fields of charges in a moving inertial frame, the magnitude of the charged particles varies according to Asif's equation of charge variation. Consequently, Maxwell's equations give different results to an observer measuring at rest. This paper explains the effect of charge variation on classical electromagnetic theory, Maxwell's equations, Coulomb's law and the Lorentz force law when referring to any inertial frame. DOI: http://dx.doi.org/10.11591/ijece.v4i2.535
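    For reference, these are the standard, unmodified Maxwell's equations in differential form (SI units) that the paper takes as its starting point; the abstract does not reproduce the charge-variation modification, so it is not shown here:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard differential form of Maxwell's equations in SI units;
% the paper's charge-variation modification is not reproduced here.
\begin{align}
  \nabla \cdot  \mathbf{E} &= \rho/\varepsilon_0
    && \text{(Gauss's law)}\\
  \nabla \cdot  \mathbf{B} &= 0
    && \text{(no magnetic monopoles)}\\
  \nabla \times \mathbf{E} &= -\,\partial \mathbf{B}/\partial t
    && \text{(Faraday's law)}\\
  \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
      + \mu_0 \varepsilon_0\, \partial \mathbf{E}/\partial t
    && \text{(Amp\`ere--Maxwell law)}
\end{align}
\end{document}
```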

    Design and Code Optimization for Systems with Next-generation Racetrack Memories

    With the rise of computationally expensive application domains such as machine learning, genomics, and fluid simulation, the quest for performance and energy-efficient computing has gained unprecedented momentum. The significant increase in computing and memory devices in modern systems has resulted in an unsustainable surge in energy consumption, a substantial portion of which is attributed to the memory system. The scaling of conventional memory technologies and their suitability for next-generation systems are also questionable. This has led to the emergence and rise of nonvolatile memory (NVM) technologies. Today, in different development stages, several NVM technologies are competing for rapid access to the market. Racetrack memory (RTM) is one such nonvolatile memory technology that promises SRAM-comparable latency, reduced energy consumption, and unprecedented density compared to other technologies. However, RTM is sequential in nature, i.e., data in an RTM cell needs to be shifted to an access port before it can be accessed. These shift operations incur performance and energy penalties. An ideal RTM, requiring at most one shift per access, can easily outperform SRAM. However, in the worst-case shifting scenario, RTM can be an order of magnitude slower than SRAM. This thesis presents an overview of the RTM device physics, its evolution, strengths and challenges, and its application in the memory subsystem. We develop tools that allow the programmability and modeling of RTM-based systems. For shift minimization, we propose a set of techniques including optimal, near-optimal, and evolutionary algorithms for efficient scalar and instruction placement in RTMs. For array accesses, we explore schedule and layout transformations that eliminate the longer overhead shifts in RTMs. We present an automatic compilation framework that analyzes static control-flow programs and transforms the loop traversal order and memory layout to maximize accesses to consecutive RTM locations and minimize shifts. We develop a simulation framework called RTSim that models various RTM parameters and enables accurate architectural-level simulation. Finally, to demonstrate the potential of RTM in non-von-Neumann in-memory computing paradigms, we exploit its device attributes to implement logic and arithmetic operations. As a concrete use case, we implement an entire hyperdimensional computing framework in RTM to accelerate the language-recognition problem. Our evaluation shows considerable performance and energy improvements compared to conventional von Neumann models and state-of-the-art accelerators.
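    To make the shifting overhead concrete, the Java sketch below counts the shifts needed to serve an access sequence and shows how data placement changes the total. It is an illustrative model only (a single access port and unit cost per one-cell shift are assumed here), not the thesis's RTSim simulator or placement algorithms:

```java
/**
 * Minimal cost model for racetrack-memory (RTM) shifting: accessing a
 * cell requires shifting the track until that cell aligns with the
 * access port, and the port's alignment persists between accesses.
 * Assumptions (not from the thesis): one port, unit cost per shift.
 */
public class RtmShiftCost {

    /** Total shifts for an access sequence, with the port initially at cell 0. */
    static int totalShifts(int[] accessSequence) {
        int port = 0, shifts = 0;
        for (int cell : accessSequence) {
            shifts += Math.abs(cell - port); // shift track until cell is under the port
            port = cell;                     // track stays aligned with this cell
        }
        return shifts;
    }

    public static void main(String[] args) {
        // The same four values accessed in the same logical order, but
        // placed at different physical offsets on the track.
        int[] naivePlacement     = {0, 7, 1, 6};  // values scattered along the track
        int[] optimizedPlacement = {0, 1, 2, 3};  // consecutive placement
        System.out.println("naive:     " + totalShifts(naivePlacement));     // 18 shifts
        System.out.println("optimized: " + totalShifts(optimizedPlacement)); // 3 shifts
    }
}
```

    Even in this tiny example, placement alone changes the shift count by a factor of six, which is the intuition behind the thesis's placement, scheduling, and layout transformations.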

    Ex vivo dermis microdialysis: A tool for bioequivalence testing of topical dermatological drug product (Demonstration of proof of concept and testing)

    Clinical response to most topical dermatological drug products (TDDP) depends on the availability of the drug in the dermis. Dermal microdialysis (dMD) is a sampling technique that permits measuring the concentration of a drug over time, in vivo, directly in the target tissue, the dermis. The pharmacokinetic parameters obtained from such studies may help to optimize the development of TDDP and can potentially be applied to the assessment of TDDP bioequivalence. However, these studies require several hours or even days of continuous sampling, which often makes them stressful and impractical for human subjects as well as animals. The goal of this dissertation was to develop a reliable and consistent ex vivo dMD method to complement and assist in vivo dMD experiments. In the first part of the project, we developed and tested the ex vivo dermal microdialysis method on two different experimental skin models using freshly excised porcine skin. Porcine skin was selected due to its close resemblance to human skin and its advantages in terms of availability and expense. For the microdialysis study, in-house dermal microdialysis probes were manufactured with controlled specifications, and the microdialysis recovery process was screened with an in vitro setup to match the intended use. The in vitro microdialysis method was optimized for probe specification, analyte suitability, perfusion flow rate, and perfusate composition. A maximized, rapid, and steady recovery was demonstrated within a wide range of concentrations. For the ex vivo dermal microdialysis study, the two skin models developed were: M1, full-thickness skin (≈0.25 cm) without the subcutaneous fat layer, placed on a hydrated 0.5 cm cellulose backing support; and M2, full-thickness skin with the subcutaneous fat layer (total thickness = 1.0 cm), placed directly on an aluminum boat, avoiding any kind of hydration. Both setups were tested with a TDDP cream and gel of metronidazole (MTZ), for which both in vivo and IVPT data are available for comparison. The two formulations, metronidazole cream and gel, were compared side by side for the rate and extent of delivery to the dermis. The latter skin model (M2) was found suitable, manifesting data comparable to the available data from in vivo pig and IVPT (human cadaver) studies. The selection of the best-fit model was based on the comparative bioavailability response from the negative control, metronidazole gel, resulting in a lower bioavailability profile (90% CI). Using this ex vivo dMD model, site-specific results for the drug can be conveniently monitored in the dermis, yielding a dose-dependent rate and extent of concentration-time exposure. The M2 model was further tested for the effect of skin temperature on the bioavailability profile of the drug. As reported in the literature, an increase in dermal exposure is expected with a rise in skin temperature. Superficial addition of heat to the skin was not feasible, as it may change the thermodynamics of the formulation, leading to alteration of the permeation kinetics. Thus, the physiological temperature of the ex vivo pig skin explant was achieved by providing continuous heat from the ventral side using a closed water-bath system. The supplementation of temperature did not impact the bioavailability profile; rather, unfavorable damage to the skin microstructure due to thermal degradation was observed.

    Further studies with the proposed ex vivo dMD model were conducted at ambient lab temperature. Another aspect of the proposed model was to test its capability to determine bioequivalence (BE). The potential of using this model for BE testing was validated by comparing the BA of MetroCream with its USFDA-approved generic, Metronidazole 0.75% cream. The overall BE estimation resulted in an ln-AUC of 91.65 (80.93, 104.88) and an ln-Cmax value of 87.56 (74.87, 102.39). The fact that the reference and test formulations can be tested simultaneously at multiple sites on a skin sample harvested from a single animal subject reduces the burden of inter-subject variability. The experimental population size required to establish bioequivalence for topically applied drugs can thus be reduced. A further aspect of the study was to design a mathematical model, based on the ex vivo dMD findings, to extend its predictability to in vivo outcomes. A first-of-its-kind unit impulse response method was applied in the dermis tissue of the skin explant to measure the absorption-independent elimination parameters. The estimated parameters were employed to calculate the cumulative absorption of the drug from different topical formulations. The absorption profile of the developed model was time-scaled and absolute-scaled with a permeation scaling factor to map onto available literature data on in vivo pig. The Levy point-to-point regression coefficient was then employed to predict the in vivo PK profile. With the current intervention, we propose a mathematical possibility of predicting in vivo outcomes for a given topical dose. The studies presented were limited to internal predictions; external validation with a different set of data is yet to be performed. Overall, the studies presented in this work provide a foundation stone for an elaborate field of work that can be undertaken to minimize the use of animals in pharmacokinetic evaluations for topical and transdermal products. Ex vivo dermal microdialysis warrants testing on a plethora of drug molecules of different polarities to decide the future of the technique. Regardless, the technique holds untapped potential and needs to be nurtured over time.
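    For context on how such intervals are read (this is the standard average-bioequivalence benchmark, e.g. from regulatory guidance, and is not stated in the abstract itself): BE is typically concluded when the 90% confidence interval for the test/reference geometric mean ratio of each ln-transformed metric (AUC, Cmax) lies within 80.00–125.00%:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard average-bioequivalence criterion (assumed context, not
% taken from the abstract): on the ln scale, form the 90% CI of the
% test-reference difference, exponentiate, and require it to fall
% within [80.00%, 125.00%].
\[
  80.00\% \;\le\;
  \exp\!\big( (\bar{Y}_T - \bar{Y}_R) \pm t_{0.05,\,\nu}\,\widehat{\mathrm{SE}} \big)
  \times 100\%
  \;\le\; 125.00\%
\]
% Here $\bar{Y}_T$ and $\bar{Y}_R$ are the means of the ln-transformed
% metric for the test and reference products, $\widehat{\mathrm{SE}}$ is
% the standard error of their difference, and $\nu$ the error degrees
% of freedom.
\end{document}
```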